2 - Deep Autofocus with CBCT Consistency Constraints [ID:12825]

So welcome to my PRS talk. This is my sixth PRS talk. I have been a member of the lab since April 2017, and today I am going to talk about my BVM paper, which will be presented at this year's BVM: deep autofocus with CBCT consistency constraints.

When we want to reconstruct tomographic images from projection images, we need exact information about the geometry, so we need to know exactly from which orientation and position each view was acquired; we will see another approach to this tomorrow in Tobias's talk. When this geometry information is not correct, because the patient has moved, we get artifacts. We can see here that we have these streak artifacts, because the patient has moved and the information acquired on our detector is smeared back to wrong locations. And what we can see is: the bigger the patient movement, i.e., the bigger the misalignment, the bigger the artifacts.
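To make the effect concrete, here is a minimal 2-D parallel-beam sketch using scikit-image (the talk concerns C-arm cone-beam CT, so this only illustrates the principle): reconstructing with view angles that no longer match the ones the data was acquired with produces exactly this kind of artifact, and the larger the jitter, the stronger the artifacts.

```python
# Minimal illustration (parallel-beam, scikit-image) of how geometry
# misalignment turns into reconstruction artifacts.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                    # ground-truth object
theta = np.linspace(0.0, 180.0, 360, endpoint=False)
sinogram = radon(phantom, theta=theta)             # "acquired" projections

# Patient motion means the geometry we assume no longer matches the one
# the data was acquired with; here we jitter the assumed view angles.
rng = np.random.default_rng(0)
theta_wrong = theta + rng.normal(scale=2.0, size=theta.shape)

reco_exact = iradon(sinogram, theta=theta)              # exact geometry
reco_misaligned = iradon(sinogram, theta=theta_wrong)   # misaligned -> streaks
```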

This idea can be used to actually compensate for the motion, and this is commonly known as the autofocus concept: we devise an image quality metric, a single number that summarizes how good our image looks, and then we optimize this single number and find the geometry that performs well on this image quality metric.
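As a sketch, such an autofocus loop could look as follows, where `reconstruct` (an FBP/FDK-type operator) and `iqm` (the scalar image quality metric) are hypothetical placeholders, not the actual implementation:

```python
# Generic autofocus skeleton: search for the geometry parameters that
# optimize a scalar image quality metric (IQM) of the reconstruction.
import numpy as np
from scipy.optimize import minimize

def autofocus(projections, initial_params, reconstruct, iqm):
    def cost(params):
        volume = reconstruct(projections, params)  # reconstruct with candidate geometry
        return iqm(volume)                         # lower = better image quality
    result = minimize(cost, np.asarray(initial_params), method="Nelder-Mead")
    return result.x                                # motion-compensated geometry
```

A gradient-free method such as Nelder-Mead is a natural choice here, since the reconstruction operator is usually not differentiable with respect to the geometry parameters.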

This idea was first presented in the field of MR motion compensation, where the histogram entropy was used. The idea behind histogram entropy and total variation is quite similar to what we do in iterative reconstruction: we assume that we have homogeneous objects, and if there is some motion, those homogeneous regions will be distorted by motion blur and double edges. The gray values in the histogram then become more randomly distributed, because they are no longer concentrated at a few distinct points; similarly, for total variation we get more gradients. This will either increase our entropy or increase our total variation.
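As a sketch, the two handcrafted measures could be computed like this, a straightforward NumPy rendering of the definitions above rather than any particular reference implementation:

```python
import numpy as np

def histogram_entropy(volume, bins=256):
    """Shannon entropy of the gray-value histogram; motion blur and double
    edges spread the gray values, which increases the entropy."""
    hist, _ = np.histogram(volume, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                        # ignore empty bins (avoid log(0))
    return -np.sum(p * np.log(p))

def total_variation(volume):
    """Sum of gradient magnitudes; double edges introduce additional
    gradients, which increases the total variation."""
    grads = np.gradient(volume.astype(float))
    return np.sum(np.sqrt(sum(g ** 2 for g in grads)))
```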

Those features are handcrafted. Our idea now is, instead of using these handcrafted features, which basically tell us what a motion-free reconstruction must look like, to learn an image quality metric that not only tells us what the motion-free state looks like but can also handle different states of motion, so that we can find the optimum starting from a motion-corrupted scan.

What we need for this is a base measure, that is, a way to actually express motion, and for this we use the reprojection error. The reprojection error is a kind of distance measure between two projections, and it measures the reconstruction-relevant deviation: if we have a point and we project it with two projection geometries P1 and P2, we get a distance on the detector, and this is basically the reconstruction-relevant deviation, because a movement along the projection ray would not affect our reprojection error.
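A minimal sketch of this measure for 3 × 4 projection matrices follows; this is my own illustrative rendering of the idea, not necessarily the exact formulation used in the paper:

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3-D points X (4 x N) with a 3 x 4 projection
    matrix P onto the detector; returns 2 x N pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

def reprojection_error(P1, P2, X):
    """Mean detector distance between the projections of the same 3-D
    points under two geometries: the reconstruction-relevant deviation."""
    d = project(P1, X) - project(P2, X)
    return np.mean(np.linalg.norm(d, axis=0))
```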

We then train a network to predict the reprojection error from a reconstructed image. We can generate our training data ourselves, and the upper part shows how: we start with a stack of projections, the motion-free scan, and then we randomly modulate the geometry to introduce motion artifacts. From this modulated trajectory we can compute our reprojection error, so we then have our motion trajectory and our reprojection error. We can then reconstruct, which gives us a reconstruction for which we know exactly what the reprojection error is. We do this many times, generating a lot of training data per patient, and then we feed this into a network, and the network tries to regress the reprojection error from it.
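Schematically, this data generation could look as follows, where `reconstruct`, `perturb_geometry`, and `reprojection_error` are hypothetical placeholders for the steps described above:

```python
def generate_samples(projections, geometry, n_samples,
                     reconstruct, perturb_geometry, reprojection_error):
    """Build (image, label) pairs from one motion-free scan."""
    samples = []
    for _ in range(n_samples):
        corrupted = perturb_geometry(geometry)           # random motion trajectory
        label = reprojection_error(geometry, corrupted)  # exact label by construction
        image = reconstruct(projections, corrupted)      # motion-corrupted reconstruction
        samples.append((image, label))
    return samples
```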

As for how we do this: we use a 33-layer residual network with a fully connected layer at the end; our input is 512 × 512 images. We use the root mean square error, because our labels are exact, as the reprojection error is exact by construction. We use 20 clinical cone-beam CT acquisitions, and for each patient we simulate 450 different motion shapes. This gives us about 7,000 reconstructions for training, 900 for validation, and approximately 450 for testing. The next question is how well our network predicts this: the better we can predict the reprojection error, the better we can compensate for the motion.
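As an illustration, here is a minimal PyTorch sketch of such a regression setup, using torchvision's ResNet-34 as a stand-in for the 33-layer residual network; the exact architecture and training code are not given in the talk:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

# Stand-in backbone, adapted to single-channel 512 x 512 inputs and a
# scalar regression head that predicts the reprojection error.
model = resnet34()
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 1)

def rmse_loss(pred, target):
    return torch.sqrt(nn.functional.mse_loss(pred, target))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):          # images: (B, 1, 512, 512)
    optimizer.zero_grad()
    loss = rmse_loss(model(images).squeeze(1), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```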

Part of a video series:

Presenters

M. Sc. Alexander Preuhs

Accessible via

Open access

Duration

00:13:00 min

Recording date

2020-02-17

Uploaded on

2020-02-17 12:39:46

Language

en-US

High-quality reconstruction with interventional C-arm cone-beam computed tomography (CBCT) requires exact geometry information. If the geometry information is corrupted, e.g., by unexpected patient or system movement, the measured signal is misplaced in the backprojection operation. With prolonged acquisition times of interventional C-arm CBCT, the likelihood of rigid patient motion increases. To adapt the backprojection operation accordingly, a motion estimation strategy is necessary. Recently, a novel learning-based approach was proposed, capable of compensating motions within the acquisition plane. We extend this method by a CBCT consistency constraint, which was proven to be efficient for motions perpendicular to the acquisition plane. By the synergistic combination of these two measures, in-plane and out-of-plane motion is well detectable, achieving an average artifact suppression of 93 %. This outperforms the entropy-based state-of-the-art autofocus measure, which achieves an average artifact suppression of 54 %.
